ingo wrote:
> What was your perception of the process when you started?
To define points in space from which sound can be emitted and which can be
moved around continuously.
> I would think of two things, "tracks" and "time-line". Tracks would be
> what you call sound_point, think of multi track recording. Every
> object has its own track.
>
> The time-line tells us what happens when. In your current macro it
> seems that you define start and end time, so you have to know the
> length of the whole sequence on beforehand. I'd prefer the data on
> the time-line to be independent of the actual length of the sequence,
> so one does not have to change values if one does a "10 frame test
> run".
>
> Track("ship1", position, trigger)
What is "trigger" here?
> It gives you maximum
> flexibility in case you want to change something later on, like gain
> or pitch. These are properties of the sound not of the object.
Gain and pitch are properties of the object/source, not the sound. This
means that you can play the same sound at several sources at the same time
with different gain/pitch values for each source. If it was a property of
the sound, then you couldn't adjust it on a per-object basis, which would be
a disadvantage. I think this logic makes sense, and it's the logic used by
OpenAL itself anyway. I should mention that one source can only play back
one sound at a time - otherwise per-object gain/pitch would indeed not have
made sense.
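To make the per-source model concrete, here is a minimal sketch in Java (plain classes, not actual OpenAL bindings) of the logic described above: one shared sound, several sources, each with its own gain and pitch, and each source playing only one sound at a time. The class and field names are my own illustration, not part of the actual program.

```java
// Sketch of per-source gain/pitch: the same Sound object is shared by
// several Sources, but gain and pitch live on the Source, mirroring
// OpenAL, where AL_GAIN and AL_PITCH are source properties.
class Sound {
    final String file;
    Sound(String f) { file = f; }
}

class Source {
    final String name;
    double gain = 1.0, pitch = 1.0;  // per-source, adjustable independently
    Sound playing;                   // one source plays one sound at a time
    Source(String n) { name = n; }
    void play(Sound s) { playing = s; }
}

public class PerSourceDemo {
    public static void main(String[] args) {
        Sound bump = new Sound("wavdata/bump.wav");
        Source ship1 = new Source("ship1");
        Source ball  = new Source("ball");
        ship1.gain = 0.5;  ship1.pitch = 1.2;  // each source keeps its own values
        ball.gain  = 1.0;  ball.pitch  = 0.8;
        ship1.play(bump);                      // same sound at two sources at once
        ball.play(bump);
        System.out.println(ship1.playing == ball.playing);  // true: shared sound
        System.out.println(ship1.gain != ball.gain);        // true: per-source gain
    }
}
```

If gain and pitch were stored on `Sound` instead, changing them for the ball would also change them for the ship, which is exactly the disadvantage described above.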
>
> // name sound_filename time
> sound_point_loop("ball","wavdata/bump.wav",1000*3.5)
>
> This one could become:
> Track("ball", position, trigger)
How is the time where the sound should be played specified? It's
insufficient to base it on the frame number, as you might want a sound to
start playing at a time value that lies between two frames.
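A small sketch of why frame numbers alone are too coarse: with a time value in milliseconds (as in the `sound_point_loop` call above), the start time can fall between two frames. The frame rate here is an assumed example value.

```java
// Converting between frame numbers and milliseconds. At 25 fps, the
// 1000*3.5 = 3500 ms start time from the macro call lands at frame 87.5,
// i.e. between frames 87 and 88 - not expressible as a frame number.
public class FrameTiming {
    static double millisToFrame(double millis, double fps) {
        return millis / 1000.0 * fps;
    }
    static double frameToMillis(double frame, double fps) {
        return frame / fps * 1000.0;
    }
    public static void main(String[] args) {
        double fps = 25.0;            // assumed frame rate for the example
        double startMs = 1000 * 3.5;  // start time from the macro call
        System.out.println(millisToFrame(startMs, fps));  // 87.5
    }
}
```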
> The Java program should know the relation ball -> bump.wav, not
I disagree here. All you need to do with my Java program is specify the
data file to use as input, and it then returns a .wav file. Having to
provide additional input to the Java program would only be more cumbersome
and time-consuming. I like that you can control everything from one place.
> POV-Ray. The Java script should know the sound-properties of the
> ball-object, like should I start the sound one frame, or n
> milliseconds, before or after the impact. Should it still have a
> sound some time after it bounced, etc. Also you could do some extra
> calculations in POV-Ray to determine whether the trigger should switch
> from off to on.
I'm not sure what you mean here, but I also don't see how the Java program
would know all these things.
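My best guess at the "trigger" idea, sketched in Java so the question is at least concrete: POV-Ray would write one trigger value per frame, and the sound script would start a sound wherever the trigger switches from off to on. This is only my interpretation of the suggestion, not anything either program actually does.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical reading of the "trigger" suggestion: scan a per-frame
// on/off track and report the frames where it switches from off to on,
// which is where a sound would start playing.
public class TriggerScan {
    static List<Integer> risingEdges(boolean[] trigger) {
        List<Integer> starts = new ArrayList<>();
        for (int i = 1; i < trigger.length; i++) {
            if (trigger[i] && !trigger[i - 1]) starts.add(i);
        }
        return starts;
    }
    public static void main(String[] args) {
        // e.g. the ball hits the floor at frames 3 and 7
        boolean[] trigger = { false, false, false, true, false, false, false, true, true };
        System.out.println(risingEdges(trigger));  // [3, 7]
    }
}
```

Even under this reading, the question above still stands: a frame-based trigger cannot place a sound between two frames.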
> So you may have to write two scripts, a POV-script and a sound script.
As it is now, you can have the sound and scene action completely integrated,
but you can also have them separated if you like. I prefer it that way.
Thanks for your thoughts. I hope you can elaborate a bit on the parts I
didn't get.
Rune
--
3D images and anims, include files, tutorials and more:
rune|vision: http://runevision.com
POV-Ray Ring: http://webring.povray.co.uk